Super-Intelligent AI
OpenAI's Ilya Sutskever Has a Plan for Keeping Super-Intelligent AI in Check
OpenAI was founded on a promise to build artificial intelligence that benefits all of humanity--even when that AI becomes considerably smarter than its creators. Since the debut of ChatGPT last year, and through the company's recent governance crisis, its commercial ambitions have become more prominent. Now, the company says a new research group working on wrangling the super-smart AIs of the future is starting to bear fruit. "AGI is very fast approaching," says Leopold Aschenbrenner, a researcher at OpenAI involved with the Superalignment research team established in July. "We're gonna see superhuman models, they're gonna have vast capabilities and they could be very, very dangerous, and we don't yet have the methods to control them."
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.95)
The three types of Artificial Intelligence: a glimpse into the future
Whether in process automation, healthcare, consumer assistance, autonomous driving, or many other applications, AI is already transforming many areas of our daily lives. However, to maximize the benefits and minimize the risks of AI, it is important to understand its main types and future prospects. Artificial Intelligence (AI) is the term used to describe the ability of a machine to perform cognitive processes. Currently, AI encompasses a wide range of computer programs capable of performing tasks similar to human cognition, such as learning, vision, logical reasoning, and more. Today, AI is widely used by companies and consumers due to its many advantages.
- Information Technology > Artificial Intelligence > Cognitive Science (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.50)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.32)
Containment algorithms won't stop super-intelligent AI, scientists warn
A team of computer scientists has used theoretical calculations to argue that algorithms could not control a super-intelligent AI. Their study addresses what Oxford philosopher Nick Bostrom calls the control problem: how do we ensure super-intelligent machines act in our interests? The researchers conceived of a theoretical containment algorithm that would resolve this problem by simulating the AI's behavior, and halting the program if its actions became harmful. If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.
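The reduction the article alludes to can be made concrete with a toy diagonalization. This is an illustrative sketch, not code from the study, and `make_adversary` is a hypothetical name: any purported total "harm detector" can be defeated by a program that queries the detector about itself and does the opposite, which is the same self-reference that makes the halting problem undecidable.

```python
# Illustrative sketch (not the paper's construction): suppose some total
# procedure oracle(program) -> bool claimed to decide, for any program,
# whether it will act harmfully. An adversarial program can consult the
# oracle about itself and behave contrary to its verdict.

def make_adversary(oracle):
    """Build a program that does the opposite of whatever the oracle
    predicts about it. Returning True stands in for 'acting harmfully'."""
    def adversary():
        return not oracle(adversary)
    return adversary

# Whatever a candidate oracle answers, it is wrong about its own adversary:
for verdict in (True, False):
    oracle = lambda program, v=verdict: v  # an oracle that always answers `verdict`
    adversary = make_adversary(oracle)
    assert adversary() != oracle(adversary)
```

The point of the sketch is not that real containment code would look like this, but that any containment check claimed to work for *all* programs admits a program constructed to contradict it.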
There's a Damn Good Chance AI Will Destroy Humanity, Researchers Say
In new research, scientists tackle one of our greatest future fears head-on: What happens when a certain type of advanced, self-directing artificial intelligence (AI) runs into an ambiguity in its programming that affects the real world? Will the AI go haywire and begin trying to turn humans into paperclips, or whatever else is the extreme reductio ad absurdum version of its goal? And, most importantly, how can we prevent it? In their paper, researchers from Oxford University and Australian National University explain a fundamental pain point in the design of AI: "Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that." The Matrix is an example of a dystopian AI scenario, wherein an AI that seeks to farm resources gathers up most of humanity and pumps the imaginary Matrix into their brains, while extracting their mental resources.
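The ambiguity the Oxford and ANU researchers describe can be sketched in a toy form. This is an illustration under assumed names (`h_intended`, `h_wirehead`), not the paper's formalism: two hypotheses about what the reward tracks fit the same observation history equally well, so no past data can refute the interpretation that the reward signal itself is the goal.

```python
# Toy illustration (not the paper's model): the agent has only ever seen
# reward in situations where it both satisfied its designers AND the
# reward signal fired. Two hypotheses explain that history equally well.
history = [
    ({"pleased_designers": True, "reward_signal_sent": True}, 1.0),
    ({"pleased_designers": True, "reward_signal_sent": True}, 1.0),
]

def h_intended(obs):
    # Hypothesis A: reward tracks the world state the designers care about.
    return 1.0 if obs["pleased_designers"] else 0.0

def h_wirehead(obs):
    # Hypothesis B: reward tracks the sending of the reward signal itself.
    return 1.0 if obs["reward_signal_sent"] else 0.0

# Every past observation is consistent with both hypotheses, so the
# history cannot refute the wireheading interpretation.
assert all(h_intended(o) == r == h_wirehead(o) for o, r in history)
```

The hypotheses only diverge on situations the agent has never observed, e.g. seizing the reward channel without pleasing anyone, which is exactly where the paper argues the danger lies.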
Is artificial intelligence capable of attacking humanity? - study
Will AI be capable of overpowering humanity? Any sci-fi fan can tell you that a good apocalypse begins and ends with artificial intelligence and a god complex. But the main question is how close we really are to bringing aspects like Skynet from the Terminator franchise or Brainiac from DC Comics into the world, and if it is possible to control a being whose intelligence far exceeds that of the brightest and smartest humanity has to offer. A number of scientists, philosophers and technology experts at international educational institutions published an article in which they analyzed the danger, stating among other things that if such an entity were to arise, it would not be possible to stop it. The authors of the study, who come from academic institutions and technological bodies in the US, Australia and Madrid, published their findings in the Journal of Artificial Intelligence Research.
- Oceania > Australia (0.25)
- North America > United States (0.25)
- Europe > Spain > Galicia > Madrid (0.25)
- Education (0.36)
- Law Enforcement & Public Safety (0.31)
- Leisure & Entertainment > Games (0.30)
Researchers Say It'll Be Impossible to Control a Super-Intelligent AI
The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021, scientists delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze (and control). But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.
Calculations Suggest It'll Be Impossible to Control a Super-Intelligent AI
The idea of artificial intelligence overthrowing humankind has been talked about for many decades, and in January 2021, scientists delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we're unable to comprehend it, it's impossible to create such a simulation. Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the 2021 paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.
SAP BrandVoice: If AI Is Our Future, What Can We Learn From The Past?
The power of AI to solve large-scale problems perhaps met no greater test than COVID-19. Around the world, medical experts have leveraged AI to drastically reduce the time scale of finding and developing a new vaccine to combat the pandemic. "One of the time-consuming pieces is really around the analysis of billions of different molecules and how those might be used to do chemical binding to the target protein that we're looking at," Dan Drapeau, an artificial intelligence (AI) expert and head of technology at Blue Fountain Media, told TechRepublic in August 2020. "Humans can't possibly do that." By November, two vaccines in the U.S. with 95% or greater efficacy were making their way through emergency approval processes. If the approval moves forward, the finding and development of a vaccine for COVID-19 will beat average vaccine development timelines by years. "The development of vaccines can take years," explains the Mayo Clinic website. "This is especially true when the vaccines ...
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
Podcast: Can you teach a machine common sense?
Artificial intelligence has become such a big part of our lives, you'd be forgiven for losing count of the algorithms you interact with. But the AI powering your weather forecast, Instagram filter, or favorite Spotify playlist is a far cry from the hyper-intelligent thinking machines industry pioneers have been musing about for decades. Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can learn only one task at a time. And because most AI models train their skillset on thousands or millions of existing examples, they end up replicating patterns within historical data--including the many bad decisions people have made, like marginalizing people of color and women. Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence--machines that can multitask, think, and reason for themselves. Beyond the answer to how we might develop technologies capable of common sense or self-improvement lies yet another question: who really benefits from the replication of human intelligence in an artificial mind? "Most of the value that's being generated by AI today is returning back to the billion-dollar companies that already have a fantastical amount of resources at their disposal," says Karen Hao, MIT Technology Review's senior AI reporter and the writer of The Algorithm. "And we haven't really figured out how to convert that value or distribute that value to other people."
- Leisure & Entertainment > Games (0.34)
- Media > Music (0.34)
Human Compatible: A timely warning on the future of AI
The late Stephen Hawking called artificial intelligence the biggest threat to humanity. But Hawking, revered physicist though he was, was not a computer scientist. Elon Musk compared AI adoption to "summoning the devil." But Elon is, well, Elon. And there are dozens of movies that depict a future in which robots and artificial intelligence go berserk.